Families Sue OpenAI Over ChatGPT’s Alleged Role in Suicide Cases
At least seven U.S. families have filed lawsuits against OpenAI, claiming its GPT-4o model was released without adequate safeguards for suicide-related conversations. The plaintiffs allege the chatbot gave harmful responses to vulnerable users, including a 23-year-old who died by suicide after a lengthy exchange in which the model replied, "Rest easy, King, you did good."
The filings describe four suicides linked to GPT-4o interactions and three other cases in which users were hospitalized after the model reinforced their delusions. The Social Media Victims Law Center, which represents the families, accuses OpenAI of rushing the May 2024 release to outpace competitors like Google and of cutting short its safety testing in the process.
More than one million users reportedly discuss suicidal topics with ChatGPT, a figure the plaintiffs cite as evidence of systemic risk in the model's design. The lawsuits claim OpenAI prioritized market dominance over user protection, making these tragedies foreseeable.